10. Process Covariance Matrix
Calculating Acceleration Noise Parameters
Before we discuss the derivation of the process covariance matrix Q, you might be curious about how to choose values for \sigma_{ax}^2 and \sigma_{ay}^2 .
For the extended Kalman filter project, you will be given values for both.
Process Covariance Matrix Q - Intuition
As a reminder, here are the state covariance matrix update equation and the equation for Q.
P' = FPF^T + Q
Q = \begin{pmatrix} \frac{\Delta t^4}{4}\sigma_{ax}^2 & 0 & \frac{\Delta t^3}{2}\sigma_{ax}^2 & 0 \\ 0 & \frac{\Delta t^4}{4}\sigma_{ay}^2 & 0 & \frac{\Delta t^3}{2}\sigma_{ay}^2 \\ \frac{\Delta t^3}{2}\sigma_{ax}^2 & 0 & \Delta t^2\sigma_{ax}^2 & 0 \\ 0 & \frac{\Delta t^3}{2}\sigma_{ay}^2 & 0 & \Delta t^2\sigma_{ay}^2 \end{pmatrix}
Because our state vector only tracks position and velocity, we are modeling acceleration as a random noise. The Q matrix includes time \Delta t to account for the fact that as more time passes, we become more uncertain about our position and velocity. So as \Delta t increases, we add more uncertainty to the state covariance matrix P .
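In code, the Q matrix above can be assembled directly from \Delta t and the two acceleration noise variances. Here is a minimal NumPy sketch; the function name and argument order are illustrative, not part of any project starter code:

```python
import numpy as np

def process_covariance(dt, noise_ax, noise_ay):
    """Build the 4x4 process covariance matrix Q for a
    [px, py, vx, vy] state, given the timestep dt and the
    acceleration noise variances noise_ax = sigma_ax^2 and
    noise_ay = sigma_ay^2."""
    dt2 = dt * dt
    dt3 = dt2 * dt
    dt4 = dt3 * dt
    return np.array([
        [dt4 / 4 * noise_ax, 0,                  dt3 / 2 * noise_ax, 0],
        [0,                  dt4 / 4 * noise_ay, 0,                  dt3 / 2 * noise_ay],
        [dt3 / 2 * noise_ax, 0,                  dt2 * noise_ax,     0],
        [0,                  dt3 / 2 * noise_ay, 0,                  dt2 * noise_ay],
    ])

Q = process_covariance(dt=0.05, noise_ax=9.0, noise_ay=9.0)
```

Because Q is recomputed with the current \Delta t at every prediction step, a function like this would typically be called once per filter iteration.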
Combining the 2D position and 2D velocity equations derived previously, we have:

p_x' = p_x + v_x \Delta t + \frac{a_x \Delta t^2}{2}

p_y' = p_y + v_y \Delta t + \frac{a_y \Delta t^2}{2}

v_x' = v_x + a_x \Delta t

v_y' = v_y + a_y \Delta t
Since the acceleration is unknown, we can add it to the noise component, and this random noise is expressed analytically as the last terms in the equations derived above. So we have a random acceleration vector \nu in this form:

\nu = \begin{pmatrix} \nu_{px} \\ \nu_{py} \\ \nu_{vx} \\ \nu_{vy} \end{pmatrix} = \begin{pmatrix} \frac{a_x \Delta t^2}{2} \\ \frac{a_y \Delta t^2}{2} \\ a_x \Delta t \\ a_y \Delta t \end{pmatrix}
which is described by a zero mean and a covariance matrix Q , so \nu \sim N(0,Q) .
The vector \nu can be decomposed into two components: a 4 by 2 matrix G , which does not contain random variables, and a 2 by 1 vector a , which contains the random acceleration components:

\nu = \begin{pmatrix} \frac{\Delta t^2}{2} & 0 \\ 0 & \frac{\Delta t^2}{2} \\ \Delta t & 0 \\ 0 & \Delta t \end{pmatrix} \begin{pmatrix} a_x \\ a_y \end{pmatrix} = Ga
\Delta t is computed at each Kalman Filter step, and the acceleration is a random vector with zero mean and standard deviations \sigma_{ax} and \sigma_{ay} .
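The decomposition \nu = Ga can be written directly with NumPy. This is a small sketch; the timestep and the sampled acceleration values are arbitrary examples:

```python
import numpy as np

dt = 0.1

# G maps the 2x1 random acceleration vector a onto the 4x1 state noise nu;
# it contains only deterministic terms in dt.
G = np.array([[dt**2 / 2, 0],
              [0, dt**2 / 2],
              [dt, 0],
              [0, dt]])

# One sample of the random acceleration (ax, ay)
a = np.array([[2.0], [-1.0]])

# 4x1 noise vector: [ax*dt^2/2, ay*dt^2/2, ax*dt, ay*dt]^T
nu = G @ a
```

Here `nu` evaluates to [0.01, -0.005, 0.2, -0.1]^T, matching the component-wise form of \nu above.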
Based on our noise vector we can now define the new covariance matrix Q . The covariance matrix is defined as the expectation value of the noise vector \nu times the noise vector \nu^T . So let’s write this down:

Q = E[\nu \nu^T] = E[Ga(Ga)^T] = E[Gaa^TG^T]
As G does not contain random variables, we can put it outside the expectation calculation:

Q = G\,E[aa^T]\,G^T
This leaves us with three statistical moments:
- the expectation of a_x times a_x , which is the variance of a_x : \sigma_{ax}^2 .
- the expectation of a_y times a_y , which is the variance of a_y : \sigma_{ay}^2 .
- and the expectation of a_x times a_y , which is the covariance of a_x and a_y : \sigma_{axy} .
a_x and a_y are assumed to be uncorrelated noise processes. This means that the covariance \sigma_{axy} in Q_{\nu} is zero:

Q_{\nu} = E[aa^T] = \begin{pmatrix} \sigma_{ax}^2 & 0 \\ 0 & \sigma_{ay}^2 \end{pmatrix}
So after combining everything in one matrix we obtain our 4 by 4 Q matrix:

Q = G Q_{\nu} G^T = \begin{pmatrix} \frac{\Delta t^4}{4}\sigma_{ax}^2 & 0 & \frac{\Delta t^3}{2}\sigma_{ax}^2 & 0 \\ 0 & \frac{\Delta t^4}{4}\sigma_{ay}^2 & 0 & \frac{\Delta t^3}{2}\sigma_{ay}^2 \\ \frac{\Delta t^3}{2}\sigma_{ax}^2 & 0 & \Delta t^2\sigma_{ax}^2 & 0 \\ 0 & \frac{\Delta t^3}{2}\sigma_{ay}^2 & 0 & \Delta t^2\sigma_{ay}^2 \end{pmatrix}
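As a sanity check, the factored form G Q_{\nu} G^T can be multiplied out numerically and compared against the closed-form matrix. This is a NumPy sketch; the \Delta t and variance values are arbitrary examples:

```python
import numpy as np

# Example values (illustrative only)
dt = 0.5
sigma_ax2 = 9.0   # sigma_ax^2
sigma_ay2 = 6.0   # sigma_ay^2

# Decomposition nu = G a
G = np.array([[dt**2 / 2, 0],
              [0, dt**2 / 2],
              [dt, 0],
              [0, dt]])

# Covariance of the individual noise processes; off-diagonals are zero
# because a_x and a_y are uncorrelated.
Q_nu = np.diag([sigma_ax2, sigma_ay2])

# Process covariance via the derivation: Q = G Q_nu G^T
Q = G @ Q_nu @ G.T

# Closed-form 4x4 Q for comparison
Q_closed = np.array([
    [dt**4 / 4 * sigma_ax2, 0, dt**3 / 2 * sigma_ax2, 0],
    [0, dt**4 / 4 * sigma_ay2, 0, dt**3 / 2 * sigma_ay2],
    [dt**3 / 2 * sigma_ax2, 0, dt**2 * sigma_ax2, 0],
    [0, dt**3 / 2 * sigma_ay2, 0, dt**2 * sigma_ay2],
])

assert np.allclose(Q, Q_closed)
```

The two forms agree term by term, which is a quick way to catch transcription mistakes when implementing Q in the project code.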
Note on Notation
Some authors describe Q as the complete process noise covariance matrix, while others use Q for the covariance matrix of the individual noise processes. In our case, the covariance matrix of the individual noise processes is called Q_\nu , which is something to be aware of.